Extreme State Aggregation beyond MDPs

Author

  • Marcus Hutter
Abstract

We consider a Reinforcement Learning setup where an agent interacts with an environment in observation-reward-action cycles without any (esp. MDP) assumptions on the environment. State aggregation and more generally feature reinforcement learning is concerned with mapping histories/raw-states to reduced/aggregated states. The idea behind both is that the resulting reduced process (approximately) forms a small stationary finite-state MDP, which can then be efficiently solved or learnt. We considerably generalize existing aggregation results by showing that even if the reduced process is not an MDP, the (q-)value functions and (optimal) policies of an associated MDP with the same state-space size solve the original problem, as long as the solution can approximately be represented as a function of the reduced states. This implies an upper bound on the required state space size that holds uniformly for all RL problems. It may also explain why RL algorithms designed for MDPs sometimes perform well beyond MDPs.
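To make the setup concrete, here is a minimal sketch (under invented assumptions, not code from the paper): full histories are mapped through an aggregation map phi to a small set of reduced states, and an off-the-shelf MDP method, tabular Q-learning, is run on those reduced states. The toy environment, the choice of phi, and all hyperparameters are illustrative only.

```python
import random
from collections import defaultdict

# Illustrative feature-RL sketch: histories -> phi -> reduced states -> Q-learning.
# Environment, phi, and hyperparameters are invented for this example.

ACTIONS = (0, 1)
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

def env_step(history, action):
    """Toy environment defined over full histories: the current action is
    rewarded iff it matches the most recent observation; the next observation
    is drawn uniformly."""
    last_obs = history[-1][0]
    reward = 1.0 if action == last_obs else 0.0
    next_obs = random.choice(ACTIONS)
    return next_obs, reward

def phi(history):
    """Aggregation map from the full history to a reduced state.
    Here: keep only the most recent observation (an assumption)."""
    return history[-1][0]

Q = defaultdict(float)  # q-values on (reduced state, action) pairs

def epsilon_greedy(s):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

history = [(random.choice(ACTIONS), 0.0)]   # list of (observation, reward)
for t in range(20000):
    s = phi(history)
    a = epsilon_greedy(s)
    obs, r = env_step(history, a)
    history.append((obs, r))
    s_next = phi(history)
    # Standard Q-learning update, applied to the reduced process.
    target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

print({sa: round(q, 2) for sa, q in sorted(Q.items())})
```

The paper's guarantee concerns cases where the optimal (q-)values are approximately representable as functions of phi(history), even when the reduced process itself is not an MDP; in this toy case that representability holds by construction.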


Similar articles

Pseudometrics for State Aggregation in Average Reward Markov Decision Processes

We consider how state similarity in average reward Markov decision processes (MDPs) may be described by pseudometrics. Introducing the notion of adequate pseudometrics which are well adapted to the structure of the MDP, we show how these may be used for state aggregation. Upper bounds on the loss that may be caused by working on the aggregated instead of the original MDP are given and compared ...
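One simple way to turn such a pseudometric into an aggregation is greedy threshold clustering: assign each state to the first representative within some tolerance. The sketch below assumes a pseudometric d and a tolerance eps are given; both the clustering rule and the toy pseudometric in the usage example are illustrative, not the construction from the paper.

```python
# Hypothetical threshold-based state aggregation: given a pseudometric d over
# states and a tolerance eps, greedily group states whose distance to an
# existing representative is at most eps.

def aggregate(states, d, eps):
    """Return a dict mapping each state to the representative of its group."""
    representatives = []
    assignment = {}
    for s in states:
        for rep in representatives:
            if d(s, rep) <= eps:
                assignment[s] = rep
                break
        else:
            representatives.append(s)
            assignment[s] = s
    return assignment

if __name__ == "__main__":
    # Toy pseudometric (an assumption): states in the same block of three
    # integers are at distance zero.
    d = lambda x, y: abs(x // 3 - y // 3)
    print(aggregate(list(range(10)), d, eps=0.5))
```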


State Aggregation in Monte Carlo Tree Search

Monte Carlo tree search (MCTS) algorithms are a popular approach to online decision-making in Markov decision processes (MDPs). These algorithms can, however, perform poorly in MDPs with high stochastic branching factors. In this paper, we study state aggregation as a way of reducing stochastic branching in tree search. Prior work has studied formal properties of MDP state aggregation in the co...
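As a rough illustration of this idea (not the paper's algorithm), the sketch below keeps Monte Carlo statistics at a chance node per aggregated successor rather than per raw sampled successor, so the number of children tracked grows with the number of aggregates instead of the stochastic branching factor. The class name and the abstraction map are assumptions.

```python
from collections import defaultdict

class AggregatedChanceNode:
    """Chance-node bookkeeping that pools statistics per aggregated successor."""

    def __init__(self, abstraction):
        self.abstraction = abstraction            # raw successor -> aggregate id
        self.visits = defaultdict(int)            # per-aggregate visit counts
        self.value_sum = defaultdict(float)       # per-aggregate return sums

    def key(self, raw_successor):
        """Route a sampled raw successor to its aggregated child."""
        return self.abstraction(raw_successor)

    def backup(self, raw_successor, rollout_return):
        """Back a rollout return up into the aggregated child's statistics."""
        k = self.key(raw_successor)
        self.visits[k] += 1
        self.value_sum[k] += rollout_return

    def estimate(self, raw_successor):
        """Current value estimate of the aggregate containing this successor."""
        k = self.key(raw_successor)
        return self.value_sum[k] / self.visits[k] if self.visits[k] else 0.0

if __name__ == "__main__":
    node = AggregatedChanceNode(abstraction=lambda s: s // 10)  # toy aggregate
    node.backup(3, 1.0)
    node.backup(7, 0.0)      # pooled with successor 3 (same aggregate)
    print(node.estimate(5))  # 0.5
```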


Transferring State Abstractions Between MDPs

Decision makers that employ state abstraction (or state aggregation) usually find solutions faster by treating groups of states as indistinguishable, ignoring irrelevant state information. Identifying irrelevant information is essential for the field of knowledge transfer, where learning takes place in a general setting spanning multiple domains. We provide a general treatment and algorithm for tra...


A linear programming approach to constrained nonstationary infinite-horizon Markov decision processes

Constrained Markov decision processes (MDPs) are MDPs optimizing an objective function while satisfying additional constraints. We study a class of infinite-horizon constrained MDPs with nonstationary problem data, finite state space, and discounted cost criterion. This problem can equivalently be formulated as a countably infinite linear program (CILP), i.e., a linear program (LP) with a count...
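For orientation, the occupancy-measure linear program below is the standard stationary, finite-state, discounted formulation of a constrained MDP; the paper's nonstationary setting replaces it with a countably infinite LP. The symbols x, P, r, c_k, b_k, gamma, and nu are generic notation, not taken from the paper.

```latex
\begin{align*}
\max_{x \ge 0} \quad & \sum_{s,a} r(s,a)\, x(s,a) \\
\text{s.t.} \quad & \sum_{a} x(s',a) - \gamma \sum_{s,a} P(s' \mid s,a)\, x(s,a) = \nu(s') && \forall s', \\
& \sum_{s,a} c_k(s,a)\, x(s,a) \le b_k && k = 1,\dots,K,
\end{align*}
```

Here x(s,a) is the discounted state-action occupancy measure, nu is the initial state distribution, and the second set of constraints encodes the K additional cost constraints.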


Exact Decomposition Approaches for Markov Decision Processes: A Survey

As classical methods are intractable for solving Markov decision processes (MDPs) with large state spaces, decomposition and aggregation techniques are very useful for coping with large problems. These techniques are in general a special case of the classic Divide-and-Conquer framework, which splits a large, unwieldy problem into smaller components and solves the parts in order to construct the gl...




Publication date: 2014